On the Almost Everywhere Convergence of Nonparametric Regression Function Estimates by Luc Devroye

Author

  • LUC DEVROYE
Abstract

Let (X, Y), (X_1, Y_1), ..., (X_n, Y_n) be independent identically distributed random vectors from R^d x R, and let E(|Y|^p) < ∞ for some p > 1. We wish to estimate the regression function m(x) = E(Y | X = x) by m_n(x), a function of x and (X_1, Y_1), ..., (X_n, Y_n). For large classes of kernel estimates and nearest neighbor estimates, sufficient conditions are given for E{|m_n(x) - m(x)|^p} → 0 as n → ∞, for almost all x. No additional conditions are imposed on the distribution of (X, Y). As a by-product, assuming only the boundedness of Y, the almost sure convergence to 0 of E{|m_n(X) - m(X)| | X_1, Y_1, ..., X_n, Y_n} is established for the same estimates. Finally, the weak and strong Bayes risk consistency of the corresponding nonparametric discrimination rules is proved for all possible distributions of the data.
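As a rough illustration of the two families of estimates the abstract refers to, the sketch below implements a Nadaraya-Watson style kernel regression estimate and a k-nearest-neighbor regression estimate. The Gaussian kernel, the uniform neighbor weights, and the function names are illustrative assumptions; the paper's results cover much broader classes of kernels and weights.

```python
import numpy as np

def kernel_regression(x, X, Y, h):
    """Nadaraya-Watson style kernel regression estimate m_n(x) of
    m(x) = E(Y | X = x), with a Gaussian kernel for illustration.

    X : (n, d) array of covariates, Y : (n,) array of responses,
    x : (d,) query point, h : bandwidth (smoothing factor).
    """
    sq_dist = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-0.5 * sq_dist / h ** 2)   # kernel weights K((x - X_i) / h)
    if w.sum() == 0.0:
        return 0.0                        # 0/0 convention for an empty window
    return float(np.dot(w, Y) / w.sum())

def knn_regression(x, X, Y, k):
    """k-nearest-neighbor regression estimate: average the responses of
    the k sample points closest to x (uniform neighbor weights)."""
    idx = np.argsort(np.sum((X - x) ** 2, axis=1))[:k]
    return float(np.mean(Y[idx]))
```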


Similar articles

The uniform convergence of nearest neighbor regression function estimators and their application in optimization

A class of nonparametric regression function estimates generalizing the nearest neighbor estimate of Cover [12] is presented. Under various noise conditions, it is shown that the estimates are strongly uniformly consistent. The uniform convergence of the estimates can be exploited to design a simple random search algorithm for the global minimization of the regression function.
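The snippet above describes exploiting uniform convergence for global minimization; a minimal sketch of that idea is below, assuming uniform candidate sampling over a box and a k-NN regression estimate. It is not the specific algorithm analyzed in the paper.

```python
import numpy as np

def random_search_minimize(X, Y, k, n_candidates, low, high, rng=None):
    """Illustrative random search for a global minimizer of the regression
    function: draw candidate points uniformly in the box [low, high]^d,
    score each with a k-NN regression estimate, and keep the best one."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    candidates = rng.uniform(low, high, size=(n_candidates, d))

    def m_hat(x):
        # k-NN regression estimate at x (uniform neighbor weights)
        idx = np.argsort(np.sum((X - x) ** 2, axis=1))[:k]
        return float(np.mean(Y[idx]))

    scores = np.array([m_hat(c) for c in candidates])
    best = int(np.argmin(scores))
    return candidates[best], float(scores[best])
```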


An Equivalence Theorem for L1 Convergence of the Kernel Regression Estimate

We show that all modes of convergence in L1 (in probability, almost surely, complete) for the standard kernel regression estimate are equivalent. AMS Subject Classification: Primary 62G05.


Necessary and Sufficient Conditions for the Pointwise Convergence of Nearest Neighbor Regression Function Estimates

where (v_{n1}, ..., v_{nn}) is a given probability vector, and (X_1(x), Y_1(x)), ..., (X_n(x), Y_n(x)) is a permutation of (X_1, Y_1), ..., (X_n, Y_n) according to increasing values of ||X_i - x||, x ∈ R^d. When ||X_i - x|| = ||X_j - x|| but i < j, X_i is said to be closer to x than X_j. The consistency properties of m_n for special choices of the weight vector (v_{n1}, ..., v_{nn}) are discussed in Cover (1968), ...
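The excerpt above begins after the display defining the estimate was cut off in extraction. Assuming the standard weighted nearest neighbor form (an assumption, since the defining formula is not reproduced here), the estimate being discussed would read:

```latex
% Assumed weighted nearest neighbor regression estimate; the defining
% display is truncated in the excerpt above.
m_n(x) = \sum_{i=1}^{n} v_{ni}\, Y_i(x)
```

where Y_i(x) denotes the response paired with the i-th nearest neighbor of x among X_1, ..., X_n.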


A Note on the L1 Consistency of Variable Kernel Estimates

1. Introduction. Most consistent nonparametric density estimates have a built-in smoothing parameter. Numerous schemes have been proposed (see, e.g., references found in Rudemo, 1982; or Devroye and Penrod, 1984) for selecting the smoothing parameter as a function of the data only (a process called automatization), and for introducing locally adaptable smoothing parameters. In this note, ...


The Consistency of Automatic Kernel Density Estimates by Luc Devroye and Clark S. Penrod

We consider the Parzen-Rosenblatt kernel density estimate on R^d with a data-dependent smoothing factor. Sufficient conditions on the asymptotic behavior of the smoothing factor are given under which the estimate is pointwise consistent almost everywhere for all densities f to be estimated. When the smoothing factor is a function only of the sample size n, it is shown that these conditions are ...
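For reference, the sketch below shows a Parzen-Rosenblatt kernel density estimate evaluated with an arbitrary smoothing factor H, which may be data-dependent. The Gaussian kernel and the commented rule-of-thumb choice of H are illustrative assumptions, not the conditions studied in the paper.

```python
import numpy as np

def parzen_rosenblatt(x, X, H):
    """Parzen-Rosenblatt kernel density estimate on R^d,
        f_n(x) = (1 / (n * H^d)) * sum_i K((x - X_i) / H),
    here with a Gaussian kernel K; H can be any (possibly
    data-dependent) smoothing factor."""
    n, d = X.shape
    u = (X - x) / H
    kvals = np.exp(-0.5 * np.sum(u ** 2, axis=1)) / (2.0 * np.pi) ** (d / 2)
    return float(np.sum(kvals) / (n * H ** d))

# One possible data-dependent smoothing factor (a common rule of thumb,
# purely for illustration):
# H = X.std() * len(X) ** (-1.0 / (X.shape[1] + 4))
```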



Journal:

Volume   Issue

Pages  -

Publication year: 1981